4 research outputs found

    DeepJoin: Learning a Joint Occupancy, Signed Distance, and Normal Field Function for Shape Repair

    We introduce DeepJoin, an automated approach to generate high-resolution repairs for fractured shapes using deep neural networks. Existing approaches to perform automated shape repair operate exclusively on symmetric objects, require a complete proxy shape, or predict restoration shapes using low-resolution voxels which are too coarse for physical repair. We generate a high-resolution restoration shape by inferring a corresponding complete shape and a break surface from an input fractured shape. We present a novel implicit shape representation for fractured shape repair that combines the occupancy function, signed distance function, and normal field. We demonstrate repairs using our approach for synthetically fractured objects from ShapeNet, 3D scans from the Google Scanned Objects dataset, objects in the style of ancient Greek pottery from the QP Cultural Heritage dataset, and real fractured objects. We outperform three baseline approaches in terms of chamfer distance and normal consistency. Unlike existing approaches and restorations using subtraction, DeepJoin restorations do not exhibit surface artifacts and join closely to the fractured region of the fractured shape. Our code is available at: https://github.com/Terascale-All-sensing-Research-Studio/DeepJoin.
    Comment: To be published at SIGGRAPH Asia 2022 (Journal
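
    As a rough illustration of the joint representation described above, the sketch below shows one way an implicit decoder could map a latent shape code and a 3D query point to occupancy, signed distance, and a surface normal. All architectural details (layer sizes, latent dimension, the composition rule in the closing comment) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the DeepJoin implementation): a joint implicit decoder
# in the spirit of the combined occupancy / SDF / normal field representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointImplicitDecoder(nn.Module):
    def __init__(self, latent_dim=256, hidden=512):  # sizes are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occ_head = nn.Linear(hidden, 1)     # occupancy probability
        self.sdf_head = nn.Linear(hidden, 1)     # signed distance
        self.normal_head = nn.Linear(hidden, 3)  # normal field (unit-normalized)

    def forward(self, latent, xyz):
        # latent: (B, latent_dim) shape code; xyz: (B, N, 3) query points
        z = latent.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        h = self.net(torch.cat([z, xyz], dim=-1))
        occ = torch.sigmoid(self.occ_head(h))
        sdf = self.sdf_head(h)
        normal = F.normalize(self.normal_head(h), dim=-1)
        return occ, sdf, normal

# A restoration shape could then be composed per query point by keeping
# material inside the predicted complete shape but outside the fractured
# input, e.g. occ_restoration = occ_complete * (1 - occ_fractured); the exact
# composition DeepJoin uses (via the predicted break surface) differs in detail.
```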

    Pix2Repair: Implicit Shape Restoration from Images

    We present Pix2Repair, an automated shape repair approach that generates restoration shapes from images to repair fractured objects. Prior repair approaches require a high-resolution watertight 3D mesh of the fractured object as input. Input 3D meshes must be obtained using expensive 3D scanners, and scanned meshes require manual cleanup, limiting accessibility and scalability. Pix2Repair takes an image of the fractured object as input and automatically generates a 3D printable restoration shape. We contribute a novel shape function that deconstructs a latent code representing the fractured object into a complete shape and a break surface. We show restorations for synthetic fractures from the Geometric Breaks and Breaking Bad datasets, for cultural heritage objects from the QP dataset, and for real fractures from the Fantastic Breaks dataset. We overcome challenges in restoring axially symmetric objects by predicting view-centered restorations. Our approach outperforms shape completion approaches adapted for shape repair in terms of chamfer distance, earth mover's distance, normal consistency, and percent restorations generated.
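
    The sketch below illustrates the general idea of deconstructing an image-derived latent code into a complete shape and a break surface; the backbone, decoder design, and composition of the restoration occupancy are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch (not the Pix2Repair implementation): an image encoder yields a
# latent code; two implicit decoders read it as a complete-shape occupancy and
# a break-surface side function; the restoration is the intersection of both.
import torch
import torch.nn as nn
import torchvision.models as models

class Pix2RepairSketch(nn.Module):
    def __init__(self, latent_dim=256, hidden=256):  # sizes are assumptions
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, latent_dim)
        self.encoder = backbone

        def mlp():
            return nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        self.complete_head = mlp()  # occupancy logit of the complete shape
        self.break_head = mlp()     # sign selects the missing side of the break

    def forward(self, image, xyz):
        # image: (B, 3, H, W); xyz: (B, N, 3) query points (view-centered frame)
        z = self.encoder(image)
        z = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        q = torch.cat([z, xyz], dim=-1)
        occ_complete = torch.sigmoid(self.complete_head(q))
        break_side = torch.sigmoid(self.break_head(q))
        # Restoration occupancy: inside the complete shape, on the missing side.
        return occ_complete * break_side
```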

    Fantastic Breaks: A Dataset of Paired 3D Scans of Real-World Broken Objects and Their Complete Counterparts

    Automated shape repair approaches currently lack access to datasets that describe real-world damaged geometry. We present Fantastic Breaks (and Where to Find Them: https://terascale-all-sensing-research-studio.github.io/FantasticBreaks), a dataset containing scanned, waterproofed, and cleaned 3D meshes for 150 broken objects, paired and geometrically aligned with complete counterparts. Fantastic Breaks contains class and material labels, proxy repair parts that join to broken meshes to generate complete meshes, and manually annotated fracture boundaries. Through a detailed analysis of fracture geometry, we reveal differences between Fantastic Breaks and synthetic fracture datasets generated using geometric and physics-based methods. We show experimental shape repair evaluation with Fantastic Breaks using multiple learning-based approaches pre-trained with synthetic datasets and re-trained with a subset of Fantastic Breaks.
    Comment: To be published at CVPR 202
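
    The evaluations in these works report chamfer distance and normal consistency between predicted and ground-truth shapes. The sketch below shows a generic way to compute such paired metrics from sampled point clouds; it is illustrative only and not the papers' evaluation code, and the function name is an assumption.

```python
# Illustrative metric sketch: symmetric chamfer distance and normal consistency
# between point clouds sampled from a predicted restoration and its paired
# ground-truth counterpart (e.g., from Fantastic Breaks).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_normal_consistency(pts_a, nrm_a, pts_b, nrm_b):
    """pts_*: (N, 3) sampled surface points; nrm_*: (N, 3) unit normals."""
    tree_a, tree_b = cKDTree(pts_a), cKDTree(pts_b)
    d_ab, idx_ab = tree_b.query(pts_a)  # nearest neighbor in B for each point of A
    d_ba, idx_ba = tree_a.query(pts_b)  # nearest neighbor in A for each point of B
    chamfer = np.mean(d_ab ** 2) + np.mean(d_ba ** 2)
    nc_ab = np.abs(np.sum(nrm_a * nrm_b[idx_ab], axis=1)).mean()
    nc_ba = np.abs(np.sum(nrm_b * nrm_a[idx_ba], axis=1)).mean()
    return chamfer, 0.5 * (nc_ab + nc_ba)
```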

    3D Manipulation of Objects in Photographs

    This thesis describes a system that allows users to perform full three-dimensional manipulations of objects in photographs. Cameras and photo-editing tools have contributed to the explosion in creative content by democratizing the process of creating visual realizations of users’ imaginations. However, shooting photographs using a camera is constrained by real-world physics, while existing photo-editing software is largely restricted to the 2D plane of the image. 3D object edits, intuitive to humans, are simply not possible in photo-editing software. The fundamental challenge in providing 3D object manipulation is that estimating the 3D structure of the object, including the geometry and appearance of object parts hidden from the viewpoint of the camera, is ill-posed. 3D object manipulations reveal hidden parts of objects that were not previously seen from the viewpoint of the camera.
    The key contributions of this thesis are algorithms that leverage 3D models from public repositories to obtain a three-dimensional representation of objects in photographs for 3D manipulation, with a seamless transition in appearance of the object from the original photograph. 3D models of objects in online repositories cannot be directly used to manipulate photographed objects, as they show mismatches in geometry and appearance, and do not contain three-dimensional illumination representing the scene where the photograph was captured. The work in this thesis provides a system that aligns the 3D model geometry, estimates three-dimensional illumination, and completes the appearance over the object in three dimensions to provide full 3D manipulation.
    To correct the mismatch between the geometry of the 3D model and the photographed object, the thesis presents an automatic model alignment technique that performs an exhaustive search in the space of viewpoint, object location, scale, and non-rigid deformation. We also provide a manual geometry adjustment tool that allows users to perform final corrections while imposing smoothness and symmetry constraints. Given the matched geometry, we present an illumination estimation approach that uses the visible pixels to obtain three-dimensional environment illumination that produces plausible effects such as cast shadows and smooth surface shading. Our appearance completion approach relates visible parts of the object to hidden parts using symmetries over the publicly available 3D model.
    Our interactive system for editing photographs re-imagines typical photo-editing operations such as rotation, translation, copy-paste, scaling, and deformation as 3D manipulations of objects. Using our system, users have created a variety of manipulations to photographs, such as flipping cars, making dynamic compositions of multiple objects suspended in the air, performing animations, and altering the stories of historical images and personal photographs.
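
    As an illustration of the exhaustive alignment search described above, the sketch below scores candidate rotations and scales of a repository 3D model by silhouette overlap (IoU) with the object mask in the photograph. The orthographic rasterizer, parameter grids, and function names are illustrative assumptions, not the thesis implementation; translation and non-rigid deformation would be searched in the same way.

```python
# Hedged sketch of an exhaustive alignment search: score each candidate pose
# by how well the model's crude orthographic silhouette matches the object mask.
import itertools
import numpy as np

def rot_y(theta):
    """Rotation about the vertical axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def silhouette(vertices, resolution):
    """Orthographic silhouette: project x, y in [-1, 1] onto a square pixel grid."""
    xy = vertices[:, :2]
    idx = ((xy * 0.5 + 0.5) * (resolution - 1)).astype(int)
    keep = np.all((idx >= 0) & (idx < resolution), axis=1)
    img = np.zeros((resolution, resolution), dtype=bool)
    img[idx[keep, 1], idx[keep, 0]] = True
    return img

def align(vertices, object_mask, angles, scales):
    """Exhaustively search rotation and scale; return the best candidate and its IoU."""
    best, best_iou = None, -1.0
    for theta, s in itertools.product(angles, scales):
        sil = silhouette(s * (vertices @ rot_y(theta).T), object_mask.shape[0])
        inter = np.logical_and(sil, object_mask).sum()
        union = np.logical_or(sil, object_mask).sum() + 1e-8
        iou = inter / union
        if iou > best_iou:
            best, best_iou = (theta, s), iou
    return best, best_iou
```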